On Model-Based RIP-1 Matrices
The Restricted Isometry Property (RIP) is a fundamental property of a matrix
enabling sparse recovery. Informally, an m x n matrix satisfies RIP of order k
in the l_p norm if ||Ax||_p \approx ||x||_p for any vector x that is k-sparse,
i.e., that has at most k non-zeros. The minimal number of rows m necessary for
the property to hold has been extensively investigated, and tight bounds are
known. Motivated by signal processing models, a recent work of Baraniuk et al.
has generalized this notion to the case where the support of x must belong to a
given model, i.e., a given family of supports. This more general notion is much
less understood, especially for norms other than l_2. In this paper we present
tight bounds for the model-based RIP property in the l_1 norm. Our bounds hold
for the two most frequently investigated models: tree-sparsity and
block-sparsity. We also show implications of our results to sparse recovery
problems.
Comment: Version 3 corrects a few errors present in the earlier version. In
particular, it states and proves correct upper and lower bounds for the
number of rows in RIP-1 matrices for the block-sparse model. The bounds are
of the form k log_b n, not k log_k n as stated in the earlier version.
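As a hedged illustration of the RIP-1 property described above (sizes and construction chosen for the demo, not the paper's tight bounds), a sparse 0/1 matrix with d ones per column, scaled by 1/d, is the standard expander-style source of RIP-1 matrices, and empirically it approximately preserves the l_1 norm of k-sparse vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, d, k = 200, 60, 8, 4  # illustrative sizes, not the paper's tight bounds

# Each column has exactly d entries equal to 1/d, mimicking the adjacency
# matrix of a d-regular bipartite graph (the usual source of RIP-1 matrices).
A = np.zeros((m, n))
for j in range(n):
    A[rng.choice(m, size=d, replace=False), j] = 1.0 / d

# Measure how well the l_1 norm of random k-sparse vectors is preserved.
ratios = []
for _ in range(200):
    x = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)
    x[support] = rng.standard_normal(k)
    ratios.append(np.linalg.norm(A @ x, 1) / np.linalg.norm(x, 1))

# The triangle inequality gives ||Ax||_1 <= ||x||_1 exactly; expansion of the
# underlying graph keeps the ratio bounded away from 0.
print(round(min(ratios), 3), round(max(ratios), 3))
```

The upper ratio is always at most 1 because every column of A has unit l_1 norm; the interesting content of RIP-1 is the lower bound, which for model-based sparsity is exactly what the paper's row bounds govern.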
Sparse recovery with partial support knowledge
14th International Workshop, APPROX 2011, and 15th International Workshop, RANDOM 2011, Princeton, NJ, USA, August 17-19, 2011. Proceedings
The goal of sparse recovery is to recover the (approximately) best k-sparse approximation x̂ of an n-dimensional vector x from linear measurements Ax of x. We consider a variant of the problem which takes into account partial knowledge about the signal. In particular, we focus on the scenario where, after the measurements are taken, we are given a set S of size s that is supposed to contain most of the "large" coefficients of x. The goal is then to find x̂ such that

||x - x̂||_p <= C min { ||x - x'||_q : x' is k-sparse, supp(x') ⊆ S }.
We refer to this formulation as the sparse recovery with partial support knowledge problem ( SRPSK ). We show that SRPSK can be solved, up to an approximation factor of C = 1 + ε, using O( (k/ε) log(s/k)) measurements, for p = q = 2. Moreover, this bound is tight as long as s = O(εn / log(n/ε)). This completely resolves the asymptotic measurement complexity of the problem except for a very small range of the parameter s.
To the best of our knowledge, this is the first variant of (1 + ε)-approximate sparse recovery for which the asymptotic measurement complexity has been determined.
Space and Naval Warfare Systems Center San Diego (U.S.) (Contract N66001-11-C-4092); David & Lucile Packard Foundation (Fellowship); Center for Massive Data Algorithmics (MADALGO); National Science Foundation (U.S.) (Grant CCF-0728645); National Science Foundation (U.S.) (Grant CCF-1065125)
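The benchmark on the right-hand side of the SRPSK guarantee, the best k-sparse approximation supported inside S, can be computed directly: keep the k largest-magnitude coordinates of x among S. A minimal sketch (the function name is ours, not from the paper):

```python
import numpy as np

def best_approx_in_support(x, S, k):
    """Best k-sparse approximation of x whose support lies inside S:
    keep the k largest-magnitude coordinates of x indexed by S."""
    S = np.asarray(list(S))
    keep = S[np.argsort(-np.abs(x[S]))[:k]]
    xk = np.zeros_like(x)
    xk[keep] = x[keep]
    return xk

x = np.array([5.0, -3.0, 0.5, 4.0, -0.1, 2.0])
S = [0, 1, 2, 4]  # candidate support; coordinate 3 is excluded
xk = best_approx_in_support(x, S, k=2)
print(xk)  # keeps 5.0 and -3.0, even though x[3] = 4.0 is larger than -3.0
```

The hard part of SRPSK is of course that x is only available through the measurements Ax; the point of the sketch is just to make the restricted benchmark concrete.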
Moments of unconditional logarithmically concave vectors
We derive two-sided bounds for moments of linear combinations of coordinates
of unconditional log-concave vectors. We also investigate how well moments of
such combinations may be approximated by moments of Gaussian random variables.
Comment: 14 pages
On Deterministic Sketching and Streaming for Sparse Recovery and Norm Estimation
We study classic streaming and sparse recovery problems using deterministic
linear sketches, including l1/l1 and linf/l1 sparse recovery problems (the
latter also being known as l1-heavy hitters), norm estimation, and approximate
inner product. We focus on devising a fixed matrix A in R^{m x n} and a
deterministic recovery/estimation procedure which work for all possible input
vectors simultaneously. Our results improve upon existing work, the following
being our main contributions:
* A proof that linf/l1 sparse recovery and inner product estimation are
equivalent, and that incoherent matrices can be used to solve both problems.
Our upper bound for the number of measurements is m=O(eps^{-2}*min{log n, (log
n / log(1/eps))^2}). We can also obtain fast sketching and recovery algorithms
by making use of the Fast Johnson-Lindenstrauss transform. Both our running
times and number of measurements improve upon previous work. We can also obtain
better error guarantees than previous work in terms of a smaller tail of the
input vector.
* A new lower bound for the number of linear measurements required to solve
l1/l1 sparse recovery. We show Omega(k/eps^2 + k log(n/k)/eps) measurements are
required to recover an x' with |x - x'|_1 <= (1+eps)|x_{tail(k)}|_1, where
x_{tail(k)} is x projected onto all but its largest k coordinates in magnitude.
* A tight bound of m = Theta(eps^{-2}log(eps^2 n)) on the number of
measurements required to solve deterministic norm estimation, i.e., to recover
|x|_2 +/- eps|x|_1.
For all the problems we study, tight bounds are already known for the
randomized complexity from previous work, except in the case of l1/l1 sparse
recovery, where a nearly tight bound is known. Our work thus aims to study the
deterministic complexities of these problems.
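The tail term x_{tail(k)} appearing in the l1/l1 lower bound has a direct computational meaning, which the following small sketch (helper name ours) makes concrete:

```python
import numpy as np

def tail(x, k):
    """x projected onto all but its k largest-magnitude coordinates."""
    t = x.copy()
    t[np.argsort(-np.abs(x))[:k]] = 0.0
    return t

x = np.array([10.0, -7.0, 3.0, 1.0, -0.5])
# A recovered x' meeting the l1/l1 guarantee satisfies
# |x - x'|_1 <= (1 + eps) * |tail(x, k)|_1.
print(np.linalg.norm(tail(x, 2), 1))  # 3.0 + 1.0 + 0.5 = 4.5
```

In particular, when x is exactly k-sparse the tail vanishes, so the guarantee demands exact recovery; the Omega(k/eps^2 + k log(n/k)/eps) bound quantifies how expensive the (1 + eps) slack is for deterministic sketches.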
The LIL for U-statistics in Hilbert spaces
We give necessary and sufficient conditions for the (bounded) law of the
iterated logarithm for U-statistics in Hilbert spaces. As a tool we also
develop moment and tail estimates for canonical Hilbert-space valued
U-statistics of arbitrary order, which are of independent interest.